Skin chromaticity gamuts for illumination recovery

Authors

  • Stuart Crichton
  • Jonas Pichat
  • Michal Mackiewicz
  • Gui Yun Tian
  • Anya C. Hurlbert
Abstract

Colour constancy algorithms range from image statistics-based pixel intensity manipulation to gamut-mapping methods, and are generally independent of specific image contents. In previous work, we have demonstrated that natural polychromatic surfaces possess distinct chromatic signatures in cone-contrast space that may be exploited for colour constancy, and that in human vision, colour constancy is improved for such objects. Here we set out to use the specific, recognisable, and ubiquitous content of human skin in colour images to drive a gamut-mapping method for colour constancy. We characterise variations in the chromaticity gamut of different types of pre-recognised human skin (male, female; Caucasian, African, Asian) under varying illumination. We use a custom-built LED illuminator to produce daylight metamers, and a spectroradiometrically calibrated hyperspectral camera (Specim V10E) to acquire images and create a novel hyperspectral skin image database. We demonstrate that human skin gamuts in cone-contrast space are characterised by a set of features that can be used to differentiate between similar illuminations, whose estimate can then be used to colour-correct an image.

Introduction

The mechanisms of colour constancy built into human vision enable object colours to remain roughly constant across changes in illumination. RGB-based camera systems are not natively colour constant, recording different RGB triplets for the same object under different illuminations; it therefore remains a goal of colour image processing to develop and implement constancy algorithms that mimic human perception in correcting for changes in illumination. Over the past 40 years a number of approaches have been used to tackle the challenge of colour constancy in both computer vision and photography, ranging from image statistics-based pixel intensity manipulation to gamut-mapping methods [3]. Gamut-mapping methods typically use the entire image regardless of image content, while “white-balance” methods typically use a reference surface selected on the basis only of its pixel intensity values. “Max-RGB” or “normalise-to-white” algorithms, for example, assume that the pixels of highest intensity in each channel represent the highest possible reflectance in that channel, and would therefore appear white under a white illumination. In-camera white-balance methods may also require the user to set a correlated colour temperature (CCT) value manually (at times annotated with more natural-language labels, such as ‘cloudy’). If the highest-intensity pixels do not correspond to white reflectances, or the actual illumination does not match the selected correlated colour temperature, the algorithm will not adequately reproduce human perception of the scene. The inadequacies of these methods are often most noticeable in the colours of the most familiar objects, such as people and their skin.
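As an illustration of the assumption behind such methods, the following is a minimal sketch of a Max-RGB style correction, assuming a linear RGB image; the code and function name are illustrative and not taken from the paper.

    import numpy as np

    def max_rgb_white_balance(image):
        """Max-RGB / normalise-to-white: assume the brightest value found in each
        channel comes from a maximally reflective (white) surface, and rescale
        each channel so that this value maps to 1.0."""
        image = np.asarray(image, dtype=np.float64)        # linear RGB, shape (H, W, 3)
        per_channel_max = image.reshape(-1, 3).max(axis=0)
        gains = 1.0 / np.maximum(per_channel_max, 1e-6)    # guard against zero-valued channels
        return np.clip(image * gains, 0.0, 1.0)

When no near-white reflectance is actually present, the recovered gains simply track whichever surfaces happen to be brightest in each channel, which is exactly the failure mode noted above.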
Here we propose to use the information contained in the images of these specific familiar objects to estimate the illumination incident upon them and, under the single-source assumption, on the entire scene. We use a modified gamut-mapping algorithm that operates on the chromaticity gamut of only a selected portion of the image.

Forsyth’s original gamut-mapping algorithm [4] assumes that the canonical gamut of a given scene is known. The algorithm takes the chromaticity gamut of the scene under an unknown illumination and calculates the most likely mapping that transforms its convex hull into the convex hull of the canonical gamut. The selected transform is the one that satisfies all of the possible point-to-point transforms between the two gamuts. The approach is illustrated in Figure 1.

Figure 1. Visualisation of Forsyth’s algorithm (RB gamut of a Macbeth colour checker under two illuminations).

For scenes that contain a complete sampling of all possible natural surfaces, the canonical gamut is complete and unchanging; but for scenes that contain biased samplings, the canonical gamut may itself be unknown. Thus, like the grey-world and white-balance algorithms, the original gamut-mapping algorithm falters when the image contents fail to match its assumptions. We suggest that knowledge of the actual image contents may therefore be useful in constraining gamut-mapping methods.

Tominaga and Wandell [5] proposed a further development in the form of a gamut correlation algorithm for a specific set of natural images [6]. The algorithm compares incoming gamuts with memory gamuts (both in the RB colour space) stored in an image database, choosing as a match the stored gamut which has the highest correlation with that of the test image. Figure 2, along with Equations (1)-(4), outlines the correlation algorithm:

    r_i = A_Ii / √(A_I A_i)    (1)

where

    A_Ii = area of the intersection of the test-image and reference gamuts (A_I ∩ A_i)    (2)
    A_I = area of the test-image gamut    (3)
    A_i = area of the reference gamut for illuminant i    (4)

Figure 2. Depiction of the Tominaga-Wandell correlation algorithm.

The restrictions on the incoming images, and on the database itself, eliminate the problem of mismatch between the canonical collection of surfaces and the image contents. For general use, though, gamut-mapping methods will nonetheless always face the challenge of constructing a database that adequately accounts for all possible materials, configurations and illuminations encountered in unbounded images. One possible solution would be to focus the database on one specific feature class, or object type, and build it comprehensively.
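Equations (1)-(4) translate directly into code. The following is a minimal sketch, assuming 2-D chromaticity samples and polygonal gamuts; the shapely dependency and the function names are illustrative choices, not part of the published algorithm.

    import numpy as np
    from shapely.geometry import MultiPoint

    def gamut_hull(chromaticities):
        """Convex hull (a polygon) of a set of 2-D chromaticity samples, shape (N, 2)."""
        return MultiPoint([tuple(p) for p in np.asarray(chromaticities)]).convex_hull

    def gamut_correlation(test_gamut, reference_gamut):
        """Equation (1): r_i = A_Ii / sqrt(A_I * A_i)."""
        a_intersection = test_gamut.intersection(reference_gamut).area   # Eq. (2)
        a_test = test_gamut.area                                         # Eq. (3)
        a_reference = reference_gamut.area                               # Eq. (4)
        return a_intersection / np.sqrt(a_test * a_reference)

    def best_matching_illuminant(test_gamut, reference_gamuts):
        """Pick the stored (memory) gamut with the highest correlation to the test gamut.
        reference_gamuts: dict mapping illuminant label -> gamut polygon."""
        return max(reference_gamuts,
                   key=lambda label: gamut_correlation(test_gamut, reference_gamuts[label]))

The skin-based approach described next restricts the gamuts to identified skin regions and compares them through a set of gamut features rather than raw gamut overlap.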
Here we propose human skin as the object of focus, due to its recognisability and near-ubiquitous presence in photographic images. Previous work in computer vision has extensively examined skin colour as an identifying feature for face detection and tracking [7,8] or for image classification [9], and in this context there is a driving goal to find illumination-independent representations of skin colour [e.g. 8]. Here, we do not propose a skin detection algorithm per se, but rather assume that skin detection has already been performed, for example by a face detection algorithm, and that once identified, the characteristics of the observed skin colour may be used to drive a colour constancy algorithm for the entire image. A similar approach has been attempted in [10] and [11]. However, as with most previous work, these approaches are based in RGB or RGB-derived spaces, e.g. HSV space [11] for scene classification. The use of general images in uncontrolled environments in those studies leads to real difficulties in establishing the ground-truth gamuts of the stated objects (skin, vegetation, and sky-sea). Here we propose instead to analyse image data in a physiological cone-contrast space, reasoning that this space is likely to have been naturally optimised to encode skin colour variations.

We also hypothesise that the human visual system may use skin colour to drive colour constancy, since rapid object-recognition processes are known to occur in human vision, which could mediate the initial step of identification and segmentation of skin areas [12]. We specifically explore the idea that the distribution of chromaticities in a known skin sample may provide sufficient information to estimate the unknown illumination on the scene, as in [13]. We start from our previous observation [1] that the chromaticity distributions of natural polychromatic surfaces such as fruits and vegetables form regular signatures in the physiological cone-contrast space. These signatures exhibit invariant features (e.g., hue angle) under changing illumination, which may mediate their constancy for the human visual system. The signatures also vary in predictable ways under changing illumination, suggesting that signatures of familiar objects may be exploited to recover information about an unknown illumination, and thereby aid colour constancy for unfamiliar objects under the same illumination. Empirically, we have shown that colour constancy is indeed improved for naturally chromatically variegated familiar objects, compared with chromatically uniform familiar or unfamiliar objects [2]. Here we observe that, in human vision, one’s own skin provides an omnipresent, familiar reference surface. We then aim to determine whether features of the skin chromaticity distributions, as represented in the physiological colour space used by the human visual system, contain sufficient information to estimate unknown illuminations. The idea that human skin may provide a reference surface for chromatic adaptation, and thereby mediate colour constancy, is not new [14]. Previous implementations of the idea have relied, though, on mean skin chromaticities only, effectively using the mean chromaticity to provide the white point against which other image chromaticities are referenced. Here we explore whether the additional information contained in the inherent spread of chromaticities of bare skin provides further support for colour constancy.

Outline of the method

As previously mentioned, we examine the feasibility of illumination estimation from skin chromaticity gamuts on the assumption that skin regions have already been identified. The first step is to form a reference set of skin chromaticity gamuts. To obtain accurate representations of the chromaticity variations in cone-contrast space, and to obtain ground-truth information for both reflectance and illumination spatial variations, we use hyperspectral imaging to create a hand image database. We then characterise the chromaticity gamuts of each skin sample in terms of a set of features, creating a reference feature look-up table (FLUT).
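As an illustration of what a FLUT entry might contain, the sketch below reduces a skin patch to a small gamut feature vector, assuming that LMS cone excitations per pixel are already available (e.g. computed from the hyperspectral data with cone fundamentals). Gamut area and hue angle are mentioned in the text; the Weber-style cone-contrast definition, the centroid feature, and all names are illustrative assumptions, not the paper's exact feature set.

    import numpy as np
    from scipy.spatial import ConvexHull

    def cone_contrast(lms, lms_background):
        """Weber-style cone contrast: excitation of each cone class (L, M, S)
        relative to the adapting background.  lms: array of shape (N, 3)."""
        lms = np.asarray(lms, dtype=float)
        return (lms - lms_background) / lms_background

    def gamut_features(chromaticities):
        """Reduce a skin patch's 2-D chromaticity samples (two chromatic axes of the
        cone-contrast space) to a small feature vector: convex-hull area, centroid,
        and hue angle of the centroid."""
        pts = np.asarray(chromaticities, dtype=float)     # shape (N, 2)
        hull = ConvexHull(pts)
        area = hull.volume              # for 2-D input, ConvexHull.volume is the hull area
        centroid = pts.mean(axis=0)
        hue_angle = np.arctan2(centroid[1], centroid[0])  # angle in the chromatic plane
        return np.array([area, centroid[0], centroid[1], hue_angle])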
Lastly, we devise and test two illumination estimation methods, explained in the cost-function algorithm section, on a set of test images. These test images include some used to create the reference FLUT, as well as ‘blind’ images taken under illuminations not used in the creation of the FLUT, in order to test whether our set of features can be used to closely estimate and match (in terms of CCT) the unknown illumination in a scene to one of our known illuminants.

The workflow is as follows:

1. The first stage is the creation of the FLUT, consisting of the aforementioned chromaticity gamut features across different illuminations for a range of skin types. As a single example, this would entail saving the differing gamut areas for the same patch of skin across a range of illuminations.

2. A new image, under an unknown illumination, is then received. The skin type is identified, and a section of skin is processed in order to produce its chromaticity gamut.

3. The features from this skin patch under the unknown illumination are then compared with the features stored within the FLUT, using the cost-function algorithm (a minimal matching sketch follows below).
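The paper's cost-function algorithm itself is described in a later section not reproduced on this page; as a placeholder, the following sketch matches a test feature vector against FLUT entries of the same skin type using a simple weighted squared-error cost, which is an assumption rather than the published method.

    import numpy as np

    # Hypothetical FLUT layout: {(skin_type, illuminant_cct): feature_vector},
    # with feature vectors as produced by gamut_features() above.

    def estimate_illuminant_cct(test_features, flut, skin_type, weights=None):
        """Return the CCT of the FLUT illuminant whose stored features for the given
        skin type are closest to the test features (a simple stand-in for the
        paper's cost-function comparison)."""
        test = np.asarray(test_features, dtype=float)
        weights = np.ones_like(test) if weights is None else np.asarray(weights, dtype=float)

        best_cct, best_cost = None, np.inf
        for (stored_type, cct), stored_features in flut.items():
            if stored_type != skin_type:
                continue                                   # compare like with like
            cost = np.sum(weights * (test - np.asarray(stored_features, dtype=float)) ** 2)
            if cost < best_cost:
                best_cct, best_cost = cct, cost
        return best_cct

Under the single-source assumption, the CCT estimated from the skin patch then serves as the illumination estimate for the whole scene, which can in turn be used to colour-correct the image.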

